Convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they have not demonstrated sufficiently accurate and robust results for clinical use. In addition, they are limited by the lack of image-specific adaptation and the lack of generalizability to previously unseen object classes. To address these problems, we propose a novel deep learning-based framework for interactive segmentation by incorporating CNNs into a bounding box and scribble-based segmentation pipeline. We propose image-specific fine-tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised (without additional user interactions) or supervised (with additional scribbles). We also propose a weighted loss function considering network- and interaction-based uncertainty for the fine-tuning. We applied this framework to two applications: 2D segmentation of multiple organs from fetal MR slices, where only two types of these organs were annotated for training; and 3D segmentation of brain tumor core (excluding edema) and whole brain tumor (including edema) from different MR sequences, where only tumor cores in one MR sequence were annotated for training. Experimental results show that 1) our model segments previously unseen objects more robustly than state-of-the-art CNNs; 2) image-specific fine-tuning with the proposed weighted loss function significantly improves segmentation accuracy; and 3) our method leads to accurate results with fewer user interactions and less user time than traditional interactive segmentation methods.
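The abstract does not spell out the form of the weighted loss. A minimal sketch of one plausible weighting scheme is given below, assuming (as an illustration, not the paper's exact formulation) that user-scribbled pixels receive a fixed high weight while network-uncertain, unscribbled pixels (predicted probability near 0.5) are down-weighted to zero during fine-tuning:

```python
import numpy as np

def weighted_bce(probs, labels, scribble_mask, w_scribble=5.0, unc_thresh=0.2):
    """Pixel-wise weighted binary cross-entropy (illustrative sketch).

    probs          : predicted foreground probabilities, shape (H, W)
    labels         : current label map used for fine-tuning, shape (H, W)
    scribble_mask  : boolean mask of user-scribbled pixels, shape (H, W)
    w_scribble     : hypothetical fixed weight for scribbled pixels
    unc_thresh     : |p - 0.5| below this marks a pixel as network-uncertain
    """
    eps = 1e-7
    probs = np.clip(probs, eps, 1.0 - eps)
    bce = -(labels * np.log(probs) + (1.0 - labels) * np.log(1.0 - probs))

    # Network-based uncertainty: predictions close to 0.5 are unreliable.
    uncertain = np.abs(probs - 0.5) < unc_thresh

    weights = np.ones_like(probs)
    weights[uncertain & ~scribble_mask] = 0.0  # ignore uncertain, unscribbled pixels
    weights[scribble_mask] = w_scribble        # trust user interactions heavily

    return float((weights * bce).sum() / (weights.sum() + eps))
```

The weight names and threshold are assumptions for illustration; the key idea matching the abstract is simply that both the network's own confidence and the user's scribbles modulate each pixel's contribution to the fine-tuning loss.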